18 research outputs found

    An Aggregation-Based Algebraic Multigrid Method with Deflation Techniques and Modified Generic Factored Approximate Sparse Inverses

    In this paper, we examine deflation-based algebraic multigrid methods for solving large systems of linear equations. Aggregation of the unknowns is applied for coarsening, while deflation techniques are proposed to improve the rate of convergence. More specifically, a V-cycle strategy is adopted in which, at each iteration, the solution is decomposed over two complementary subspaces, and the approximate solution is formed by combining the multigrid and deflation components. To improve performance and convergence behavior, the proposed scheme is coupled with the Modified Generic Factored Approximate Sparse Inverse preconditioner. Furthermore, a parallel version of the multigrid scheme is proposed for multicore systems, improving the performance of the techniques. Finally, characteristic model problems are solved to demonstrate the applicability of the proposed schemes, and numerical results are given.
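    The deflation component can be illustrated with a short, self-contained sketch. The code below is a hedged approximation only: it applies a standard deflated conjugate gradient solve to a 1D Poisson model problem, using an aggregation-style piecewise-constant deflation space Z as the coarse space; the paper's V-cycle multigrid, its parallelization and the Modified Generic Factored Approximate Sparse Inverse preconditioner are not reproduced, and all names are illustrative.

        import numpy as np

        def poisson_1d(n):
            # 1D Poisson matrix as a small stand-in model problem.
            return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

        def aggregation_space(n, agg_size):
            # Piecewise-constant deflation vectors over aggregates of consecutive
            # unknowns, mimicking an aggregation-based coarse space (illustrative).
            m = -(-n // agg_size)
            Z = np.zeros((n, m))
            for j in range(m):
                Z[j * agg_size:(j + 1) * agg_size, j] = 1.0
            return Z

        def deflated_cg(A, b, Z, tol=1e-8, maxit=500):
            # Deflated CG: the solution is split over two complementary subspaces,
            # the coarse space range(Z) (solved directly via E = Z^T A Z) and its
            # complement (solved by CG on the deflated operator P A).
            E = Z.T @ A @ Z
            Q = lambda v: Z @ np.linalg.solve(E, Z.T @ v)   # coarse-space solve
            P = lambda v: v - A @ Q(v)                      # P = I - A Z E^{-1} Z^T
            x_hat = np.zeros_like(b)
            r = P(b)
            p = r.copy()
            rs = r @ r
            for _ in range(maxit):
                Ap = P(A @ p)
                alpha = rs / (p @ Ap)
                x_hat += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            # Recombine the coarse component Q b with the deflated component P^T x_hat.
            return Q(b) + x_hat - Q(A @ x_hat)

        A = poisson_1d(64)
        b = np.ones(64)
        x = deflated_cg(A, b, aggregation_space(64, 8))
        print("residual norm:", np.linalg.norm(b - A @ x))

    The point of the decomposition is that the coarse component is obtained directly through the small Galerkin matrix E = Z^T A Z, so the iterative part only has to resolve the complementary component, which is what improves the rate of convergence.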

    Simulating fog and edge computing scenarios: an overview and research challenges

    The fourth industrial revolution heralds a paradigm shift in how people, processes, things, data and networks communicate and connect with each other. Conventional computing infrastructures are struggling to satisfy dramatic growth in demand from a deluge of connected heterogeneous endpoints located at the edge of networks while, at the same time, meeting quality of service levels. The complexity of computing at the edge makes it increasingly difficult for infrastructure providers to plan for and provision resources to meet this demand. While simulation frameworks are used extensively in the modelling of cloud computing environments in order to test and validate technical solutions, they are at a nascent stage of development and adoption for fog and edge computing. This paper provides an overview of the challenges posed by fog and edge computing in relation to simulation.

    A preliminary systematic review of computer science literature on cloud computing research using open source simulation platforms.

    Research and experimentation on live hyperscale clouds is limited by their scale, complexity, value and issues of commercial sensitivity. As a result, there has been an increase in the development, adaptation and extension of cloud simulation platforms to enable enterprises, application developers and researchers to undertake both testing and experimentation. While there have been numerous surveys of cloud simulation platforms and their features, few examine how these platforms are being used for research purposes. This paper provides a preliminary systematic review of literature on this topic, covering 256 papers from 2009 to 2016, and aims to provide insights into the current status of cloud computing research using open source cloud simulation platforms. Our two-level analysis scheme includes a descriptive and a synthetic analysis against a highly cited taxonomy of cloud computing. The analysis uncovers some imbalances in research and the need for a more granular and refined taxonomy against which to classify cloud computing research using simulators. The paper can be used to guide literature reviews in the area and identifies potential research opportunities for cloud computing and simulation researchers, complementing extant surveys on cloud simulation platforms.

    Towards simulation and optimization of cache placement on large virtual Content Distribution Networks

    IP video traffic is forecast to be 82% of all IP traffic by 2022. Traditionally, Content Distribution Networks (CDN) were used extensively to meet the quality of service levels for IP video services. To handle the dramatic growth in video traffic, CDN operators are migrating their infrastructure to the cloud and fog in order to leverage their greater availability and flexibility. For hyper-scale deployments, energy consumption, cache placement, and resource availability can be analyzed using simulation in order to improve resource utilization and performance. Recently, a discrete-time simulator for modelling hierarchical virtual CDNs (vCDNs) was proposed with reduced memory requirements and increased performance on multi-core systems to cater to the scale and complexity of these networks. The first iteration of this discrete-time simulator had a number of limitations affecting accuracy and applicability: it supported only tree-based topologies, results were computed per level, and requests for the same content differed only in time duration. In this paper, we present an improved simulation framework that (a) supports graph-based network topologies, (b) differentiates requests by their requirements, and (c) computes statistics per site and network metrics per link, improving granularity and parallel performance. Moreover, we also propose a two-phase optimization scheme that uses simulation outputs to guide the search for optimal cache placements. In order to evaluate our proposal, we simulate a vCDN network based on real traces obtained from the BT vCDN infrastructure and analyze performance and scalability aspects.
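    As a rough illustration of the two-phase idea (simulate a request trace, then use the simulation output to guide cache placement), the sketch below replays a synthetic trace against a candidate placement and greedily refills each site's cache from per-site request statistics. The trace generator, site names, capacity model and greedy heuristic are all illustrative assumptions, not the vCDN simulator or optimizer described in the paper.

        import random
        from collections import Counter

        def simulate(placement, requests):
            # Phase 1: replay a request trace and measure the cache hit ratio
            # for a given placement (dict: site -> set of cached content ids).
            hits = sum(1 for site, content in requests
                       if content in placement.get(site, set()))
            return hits / len(requests)

        def greedy_placement(requests, sites, capacity):
            # Phase 2: fill each site's cache with its most requested items,
            # using the per-site statistics gathered from the simulated trace.
            per_site = {s: Counter() for s in sites}
            for site, content in requests:
                per_site[site][content] += 1
            return {s: {c for c, _ in per_site[s].most_common(capacity)} for s in sites}

        # Synthetic trace: 3 sites, Zipf-like popularity over 50 content items.
        random.seed(0)
        sites = ["site-A", "site-B", "site-C"]
        contents = list(range(50))
        weights = [1.0 / (i + 1) for i in contents]
        requests = [(random.choice(sites), random.choices(contents, weights)[0])
                    for _ in range(5000)]

        placement = greedy_placement(requests, sites, capacity=10)
        print(f"simulated cache hit ratio: {simulate(placement, requests):.2f}")

    In a full optimization loop the placement produced in the second phase would be fed back into the simulator and refined iteratively; a single pass is shown here for brevity.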

    Heterogeneity, high performance computing, self-organization and the Cloud

    This open access book addresses the most recent developments in cloud computing such as HPC in the Cloud, heterogeneous clouds, and self-organisation and self-management, and discusses the business implications of cloud computing adoption. Establishing the need for a new architecture for cloud computing, it discusses a novel cloud management and delivery architecture based on the principles of self-organisation and self-management. This focus shifts the deployment and optimisation effort from the consumer to the software stack running on the cloud infrastructure. It also outlines validation challenges and introduces a novel generalised extensible simulation framework to illustrate the effectiveness, performance and scalability of self-organising and self-managing delivery models on hyperscale cloud infrastructures. It concludes with a number of potential use cases for self-organising, self-managing clouds and their impact on business.

    Studying a digital business ecosystem

    No full text
    EThOS - Electronic Theses Online Service (United Kingdom)

    On the Optimization of Self-Organization and Self-Management Hardware Resource Allocation for Heterogeneous Clouds

    No full text
    In recent years, there has been a tendency to migrate from traditional homogeneous clouds with centralized provisioning of resources to heterogeneous clouds whose specialized hardware is governed in a distributed and autonomous manner. The recently proposed CloudLightning architecture introduced a dynamic way to provision heterogeneous cloud resources by efficiently shifting the selection of underlying resources from the end user to the system. In this work, an optimized Suitability Index and assessment function are proposed, along with their theoretical analysis, for improving the computational efficiency, energy consumption, service delivery and scalability of the distributed orchestration. The effectiveness of the proposed scheme is evaluated using simulation, comparing the optimized methods with the original approach and with traditional centralized resource management on real and synthetic High Performance Computing applications. Finally, numerical results are presented and discussed regarding the improvements over the defined evaluation criteria.
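    In general terms, a suitability index of this kind can be sketched as a weighted combination of normalized per-resource metrics, with the orchestrator selecting the highest-scoring resource. The metrics, weights and data structures below are illustrative assumptions only, not the optimized index or assessment function proposed in this work.

        from dataclasses import dataclass

        @dataclass
        class Resource:
            name: str
            free_cpu_fraction: float   # 0..1, remaining CPU capacity
            energy_efficiency: float   # 0..1, higher is more efficient
            has_accelerator: bool      # e.g. a GPU, MIC or FPGA is available

        def suitability_index(res, needs_accelerator, w_util=0.4, w_energy=0.4, w_acc=0.2):
            # Combine normalized metrics into a single score in [0, 1].
            acc_match = 1.0 if (res.has_accelerator or not needs_accelerator) else 0.0
            return (w_util * res.free_cpu_fraction
                    + w_energy * res.energy_efficiency
                    + w_acc * acc_match)

        def select_resource(resources, needs_accelerator):
            # Pick the resource with the highest suitability index.
            return max(resources, key=lambda r: suitability_index(r, needs_accelerator))

        pool = [
            Resource("cpu-node-1", 0.8, 0.5, False),
            Resource("gpu-node-1", 0.6, 0.7, True),
            Resource("mic-node-1", 0.3, 0.9, True),
        ]
        print(select_resource(pool, needs_accelerator=True).name)

    In a distributed orchestration, such a score could be computed locally by each resource manager and only the scores of the best candidates propagated upwards, which keeps the selection scalable.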

    On the Rate of Convergence and Complexity of Normalized Implicit Preconditioning for Solving Finite Difference Equations in Three Space Variables

    No full text
    Normalized approximate factorization procedures are presented for solving sparse linear systems derived from the finite difference discretization of partial differential equations in three space variables. Normalized implicit preconditioned conjugate gradient-type schemes, used in conjunction with these normalized approximate factorization procedures, are presented for the efficient solution of sparse linear systems. A convergence analysis, with theoretical estimates of the rate of convergence and the computational complexity of the normalized implicit preconditioned conjugate gradient method, is also given. The application of the proposed method to characteristic three-dimensional boundary value problems is discussed, and numerical results are given.
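    As a rough stand-in for the implicit preconditioning described above, the sketch below runs a conjugate gradient iteration preconditioned by an incomplete Cholesky factorization, applied implicitly through two triangular solves, on a small three-dimensional finite-difference Laplacian. The factorization and model problem are generic illustrations, not the normalized approximate factorization procedures analyzed in the paper.

        import numpy as np

        def laplacian_3d(m):
            # 7-point finite-difference Laplacian on an m^3 grid (Dirichlet BCs),
            # assembled via Kronecker sums.
            T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
            I = np.eye(m)
            return (np.kron(np.kron(T, I), I) + np.kron(np.kron(I, T), I)
                    + np.kron(np.kron(I, I), T))

        def incomplete_cholesky(A):
            # IC(0): Cholesky restricted to the nonzero pattern of A, a generic
            # stand-in for an approximate (incomplete) factorization.
            L = np.tril(A).copy()
            n = L.shape[0]
            for k in range(n):
                L[k, k] = np.sqrt(L[k, k])
                L[k + 1:, k] /= L[k, k]
                for j in range(k + 1, n):
                    if L[j, k] != 0.0:
                        in_pattern = L[j:, j] != 0.0
                        L[j:, j] -= np.where(in_pattern, L[j:, k] * L[j, k], 0.0)
            return L

        def pcg(A, b, L, tol=1e-8, maxit=500):
            # Conjugate gradient preconditioned by M = L L^T (two triangular solves).
            x = np.zeros_like(b)
            r = b - A @ x
            z = np.linalg.solve(L.T, np.linalg.solve(L, r))
            p = z.copy()
            rz = r @ z
            for it in range(maxit):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = np.linalg.solve(L.T, np.linalg.solve(L, r))
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, it + 1

        A = laplacian_3d(6)
        b = np.ones(A.shape[0])
        x, iters = pcg(A, b, incomplete_cholesky(A))
        print(f"PCG iterations: {iters}, residual: {np.linalg.norm(b - A @ x):.2e}")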

    Parallel Preconditioned Conjugate Gradient Square Method Based on Normalized Approximate Inverses

    No full text
    A new class of normalized explicit approximate inverse matrix techniques, based on normalized approximate factorization procedures, is introduced for solving sparse linear systems resulting from the finite difference discretization of partial differential equations in three space variables. A new parallel normalized explicit preconditioned conjugate gradient square method, used in conjunction with these approximate inverse matrix techniques, is also presented for efficiently solving sparse linear systems on distributed memory systems with the Message Passing Interface (MPI) communication library, along with theoretical estimates of speedup and efficiency. The implementation and performance on a distributed memory MIMD machine using MPI are also investigated. Applications to characteristic initial/boundary value problems in three dimensions are discussed, and numerical results are given.
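    The explicit variant can be sketched in a similar spirit: below, a preconditioned conjugate gradient squared (CGS) iteration applies an explicit approximate inverse, built here from a truncated Neumann series, purely through matrix-vector products. This serial NumPy sketch is a generic illustration; the paper's normalized explicit approximate inverses and MPI-based distributed-memory implementation are not reproduced.

        import numpy as np

        def laplacian_3d(m):
            # 7-point finite-difference Laplacian on an m^3 grid (Dirichlet BCs).
            T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
            I = np.eye(m)
            return (np.kron(np.kron(T, I), I) + np.kron(np.kron(I, T), I)
                    + np.kron(np.kron(I, I), T))

        def neumann_approx_inverse(A, terms=3):
            # Explicit approximate inverse from a truncated Neumann series:
            # M ~ (I + G + G^2 + ...) D^{-1}, with G = I - D^{-1} A.  A generic
            # stand-in for an explicit approximate inverse matrix.
            D_inv = np.diag(1.0 / np.diag(A))
            G = np.eye(A.shape[0]) - D_inv @ A
            M, Gk = np.eye(A.shape[0]), np.eye(A.shape[0])
            for _ in range(terms - 1):
                Gk = Gk @ G
                M = M + Gk
            return M @ D_inv

        def pcgs(A, b, M, tol=1e-8, maxit=500):
            # Preconditioned Conjugate Gradient Squared; the preconditioning step
            # is an explicit multiplication by the approximate inverse M.
            x = np.zeros_like(b)
            r = b - A @ x
            r_tilde = r.copy()
            rho_prev, p, q = 1.0, np.zeros_like(b), np.zeros_like(b)
            for it in range(maxit):
                rho = r_tilde @ r
                beta = 0.0 if it == 0 else rho / rho_prev
                u = r + beta * q
                p = u + beta * (q + beta * p)
                v = A @ (M @ p)
                alpha = rho / (r_tilde @ v)
                q = u - alpha * v
                u_hat = M @ (u + q)
                x = x + alpha * u_hat
                r = r - alpha * (A @ u_hat)
                rho_prev = rho
                if np.linalg.norm(r) < tol:
                    break
            return x, it + 1

        A = laplacian_3d(6)
        b = np.ones(A.shape[0])
        x, iters = pcgs(A, b, neumann_approx_inverse(A))
        print(f"CGS iterations: {iters}, residual: {np.linalg.norm(b - A @ x):.2e}")

    Because the preconditioning step is a plain matrix-vector product, it parallelizes naturally, which is the property a distributed-memory MPI implementation would exploit.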